Belief-Space Planning for Resourceful Manipulation and Mobility
Robots are increasingly expected to work in partially observable and unstructured environments. They need to select actions that exploit perceptual and motor resourcefulness to manage uncertainty based on the demands of the task and environment. The research in this dissertation makes two primary contributions. First, it develops a new concept in resourceful robot platforms called the UMass uBot and introduces the sixth and seventh in the uBot series. uBot-6 introduces multiple postural configurations that enable different modes of mobility and manipulation to meet the needs of a wide variety of tasks and environmental constraints. uBot-7 extends this with series elastic actuators (SEAs) to improve manipulation capabilities and support safer operation around humans. The resourcefulness of these robots is complemented with a belief-space planning framework that enables task-driven action selection in the context of the partially observable environment. The framework uses a compact but expressive state representation based on object models. We extend an existing affordance-based object model, called an aspect transition graph (ATG), with geometric information. This enables object-centric modeling of features and actions, making the model much more expressive without increasing its complexity. A novel task representation enables the belief-space planner to perform general object-centric tasks ranging from recognition to manipulation of objects. The approach supports the efficient handling of multi-object scenes. The combination of the physical platform and the planning framework is evaluated in two novel, challenging, partially observable planning domains. The ARcube domain provides a large population of objects that are highly ambiguous. Objects can only be differentiated using multi-modal sensor information and manual interactions.
In the dexterous mobility domain, a robot can employ multiple mobility modes to complete navigation tasks under a variety of possible environmental constraints. The performance of the proposed approach is evaluated using experiments in simulation and on a real robot.
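The core loop of belief-space planning over ambiguous objects can be illustrated with a minimal sketch: maintain a probability distribution over object hypotheses, update it with Bayes' rule after each sensing or manipulation action, and prefer actions with high expected information gain. The objects, observation model, and face labels below are invented for illustration and are not taken from the dissertation or its ATG models.

```python
import math

def normalize(belief):
    """Rescale a belief dict so its probabilities sum to one."""
    total = sum(belief.values())
    return {h: p / total for h, p in belief.items()}

def update(belief, likelihood, obs):
    """Bayes update: P(h | o) is proportional to P(o | h) * P(h)."""
    return normalize({h: likelihood[h][obs] * p for h, p in belief.items()})

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_info_gain(belief, likelihood, observations):
    """Expected entropy reduction from one more observation action."""
    h0 = entropy(belief)
    gain = 0.0
    for o in observations:
        p_o = sum(likelihood[h][o] * p for h, p in belief.items())
        if p_o > 0:
            gain += p_o * (h0 - entropy(update(belief, likelihood, o)))
    return gain

# Two visually ambiguous cubes that differ mainly on one face
# (hypothetical observation model).
likelihood = {
    "cube_A": {"red_face": 0.9, "blue_face": 0.1},
    "cube_B": {"red_face": 0.2, "blue_face": 0.8},
}
belief = {"cube_A": 0.5, "cube_B": 0.5}
belief = update(belief, likelihood, "red_face")   # mass shifts toward cube_A
gain = expected_info_gain(belief, likelihood, ["red_face", "blue_face"])
```

In the full framework, the hypotheses would be nodes of an aspect transition graph rather than bare labels, and manipulation actions (e.g. flipping a cube) would also change which aspects are observable; the sketch keeps only the belief-update machinery.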
Path Planning for Dexterous Mobility
In order to overcome a large variety of run-time constraints, robots are being designed to be more resourceful by incorporating more sensory and motor options for any given task. The added flexibility provides a basis for dexterous problem solving, but challenges planners by increasing the complexity of search. Moreover, the cost of functionally equivalent options can vary dramatically. In the worst case, naive approaches to planning avoid expensive actions until inexpensive options are explored exhaustively, leading to poor overall search performance. We present a dexterous robot that introduces multiple types of locomotor actions with significant differences in cost and situational value, and apply standard search techniques to demonstrate the additional challenges that arise in the context of dexterous mobility. Results highlight the incentives, opportunities, and impact of overcoming these challenges. Additionally, we present a prototype for a path planner that uses environmental features to define an efficient set of subgoals for dexterous motion planning.
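The search problem described above can be sketched by planning over (location, mobility-mode) states, where functionally equivalent actions carry very different costs and switching modes costs effort in itself. The graph, mode names ("drive", "prone"), and cost values below are invented for illustration; the actual planner and robot modes differ.

```python
import heapq

def plan(edges, switch_cost, start, goal):
    """Dijkstra over (node, mode) states.

    edges: {(u, v): {mode: cost}} traversal costs per mobility mode;
    a mode absent from an edge's dict cannot traverse that edge.
    start: (node, mode) state; goal: node name.
    """
    all_modes = {m for costs in edges.values() for m in costs}
    frontier = [(0.0, start)]
    best = {start: 0.0}
    parent = {start: None}
    while frontier:
        d, (u, m) = heapq.heappop(frontier)
        if u == goal:                       # reconstruct the state path
            path = [(u, m)]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return d, path[::-1]
        if d > best[(u, m)]:
            continue
        moves = []
        # Traverse an adjacent edge in the current mode, if allowed.
        for (a, b), costs in edges.items():
            if a == u and m in costs:
                moves.append(((b, m), costs[m]))
        # Or switch mobility mode in place.
        for m2 in all_modes - {m}:
            moves.append(((u, m2), switch_cost))
        for state, c in moves:
            nd = d + c
            if nd < best.get(state, float("inf")):
                best[state] = nd
                parent[state] = (u, m)
                heapq.heappush(frontier, (nd, state))
    return float("inf"), []

# A low doorway is only passable in the slower "prone" mode, so the
# cheapest plan drives to the door, then switches modes there.
edges = {
    ("A", "door"): {"drive": 1.0, "prone": 4.0},
    ("door", "B"): {"prone": 4.0},
}
cost, path = plan(edges, switch_cost=2.0, start=("A", "drive"), goal="B")
```

The subgoal idea in the prototype corresponds to seeding such a search with states at environmental features (doorways, stairs) where a mode switch is likely to pay off, rather than searching the full product space uniformly.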
Postural Modes and Control for Dexterous Mobile Manipulation: the UMass uBot Concept
Abstract — We present the UMass uBot concept for dexterous mobile manipulation. The uBot concept is built around Bernstein’s definition of dexterity—“the ability to solve a motor problem correctly, quickly, rationally, and resourcefully” [1]. We contend that dexterity in robotic platforms cannot arise from control alone and can only be achieved when the entire design of the robot affords resourceful behavior. uBot-6 is the latest robot in the uBot series whose design affords several postural configurations and mobility modes. We discuss these dexterous mobility options in detail and demonstrate the strength of dexterous mobility.
Choosing Informative Actions for Manipulation Tasks
Abstract—Autonomous robots demand complex behavior to perform tasks in unstructured environments. To meet these expectations efficiently, a robot must organize knowledge of past interactions with the world so that it can facilitate future tasks. With this goal in mind, we present a knowledge representation that makes explicit the invariant spatial relationships between sensorimotor features that constitute a rigid body and uses them to reason about other tasks and run-time contexts.
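The invariant at the heart of such a representation is that the relative pose between two features of a rigid body never changes, however the body itself moves. A toy 2D version, with made-up feature poses (the paper's actual features and frames are richer), learns the relative pose once and later uses it to predict where the second feature should appear after re-observing only the first:

```python
import math

def compose(p, q):
    """Compose 2D poses (x, y, theta): apply q in the frame of p."""
    x, y, t = p
    qx, qy, qt = q
    return (x + qx * math.cos(t) - qy * math.sin(t),
            y + qx * math.sin(t) + qy * math.cos(t),
            t + qt)

def inverse(p):
    """Inverse of a 2D pose, so compose(inverse(p), p) is the identity."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-x * c - y * s, x * s - y * c, -t)

# Observe two features of the same rigid object once and record the
# invariant: the pose of feature B expressed in feature A's frame.
pose_A = (1.0, 2.0, 0.0)
pose_B = (2.0, 3.0, math.pi / 2)
rel_AB = compose(inverse(pose_A), pose_B)   # rigid-body invariant

# Later the object has moved; re-observing A alone predicts B.
pose_A_new = (5.0, 0.0, math.pi / 2)
pose_B_pred = compose(pose_A_new, rel_AB)
```

The prediction gives the planner an expectation to verify (is B where the model says it should be?), which is one way stored relationships can select informative actions in new run-time contexts.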